47 research outputs found

    Tangible auditory interfaces : combining auditory displays and tangible interfaces

    Bovermann T. Tangible auditory interfaces: combining auditory displays and tangible interfaces. Bielefeld (Germany): Bielefeld University; 2009. Tangible Auditory Interfaces (TAIs) investigate the capabilities of interconnecting Tangible User Interfaces and Auditory Displays. TAIs utilise artificial physical objects as well as soundscapes to represent digital information. The interconnection of the two fields establishes a tight coupling between information and operation that builds on the human's familiarity with the incorporated interrelations. This work gives a formal introduction to TAIs and demonstrates their key features by means of seven proof-of-concept applications.

    A SUPERCOLLIDER CLASS FOR VOWEL SYNTHESIS AND ITS USE FOR SONIFICATION

    Presented at the 17th International Conference on Auditory Display (ICAD2011), 20-23 June, 2011 in Budapest, Hungary. In this paper, we present building blocks for the synthesis of vowel sounds in the programming language SuperCollider. We discuss the advantages of using vowel-based synthesis and review where it has already been used in sonifications. Then, we describe in detail the main class Vowel, which handles all parameters related to the formants that are typically used for vowel synthesis. In order to simplify the handling of the Vowel class, we introduce two auxiliary pseudo-UGens: Formants for additive synthesis, and BPFStack for subtractive synthesis. This introduction of the building blocks is followed by code examples for sound synthesis, which make use of the described classes and their specific features. We finally present sample applications, showing how these building blocks can be used in sonification.
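The additive, formant-based approach described in the abstract can be sketched independently of SuperCollider. The following Python snippet is illustrative only (the formant values are approximate textbook figures for an /a/ vowel, and the function name is not part of the paper's Vowel class API); it builds a vowel-like tone by summing harmonics of a fundamental, each weighted by resonance curves centred on three formants:

```python
import numpy as np

# Illustrative formant centres and bandwidths for an /a/-like vowel
# (approximate textbook values, not the Vowel class's data).
FORMANTS = [(800.0, 80.0), (1150.0, 90.0), (2900.0, 120.0)]  # (centre Hz, bandwidth Hz)

def vowel_additive(f0=110.0, dur=0.5, sr=44100):
    """Additive formant synthesis: sum harmonics of f0, each weighted
    by its proximity to the formant peaks."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    n_harmonics = int((sr / 2) // f0)
    for k in range(1, n_harmonics + 1):
        freq = k * f0
        # Amplitude: sum of simple resonance curves centred on each formant.
        amp = sum(1.0 / (1.0 + ((freq - fc) / bw) ** 2) for fc, bw in FORMANTS)
        out += amp * np.sin(2 * np.pi * freq * t)
    return out / np.max(np.abs(out))  # normalise to [-1, 1]

signal = vowel_additive()
```

The same idea drives the subtractive variant: instead of weighting sine partials, a bank of band-pass filters centred on the formants shapes a broadband source.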

    Juggling Sounds

    Bovermann T, Groten J, deCampo A, Eckel G. Juggling Sounds. In: Proceedings of the 2nd International Workshop on Interactive Sonification. York; 2007. In this paper we describe JUGGLING SOUNDS, a system for realtime auditory monitoring of juggling patterns. We explain different approaches to gain insight into the movements, and possible applications in both training and juggling performance of single-juggler patterns. Furthermore, we report first impressions and experiences gained in a performance and its preparation, which took place in the CUBE at the Institute of Electronic Music (IEM), Graz.


    Durcheinander. Understanding Clustering Via Interactive Sonification

    Presented at the 14th International Conference on Auditory Display (ICAD2008) on June 24-27, 2008 in Paris, France. With Durcheinander we present a system to help understand agglomerative clustering processes as they appear in various data-mining tasks. Durcheinander consists of a toy dataset represented by several small objects on a tabletop surface. A computer vision system tracks their positions and computes a cluster dendrogram, which is sonified every time a substantial change in this dendrogram takes place. Durcheinander may be used to answer questions concerning the behavior of clustering algorithms under various conditions. We propose its usage as a didactical and explorative platform for single- and multi-user operation.
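The underlying clustering step can be sketched with standard tools. In the Python sketch below, both `dendrogram_heights` and the change criterion in `substantial_change` are hypothetical illustrations (the abstract does not specify the paper's criterion); they show how a dendrogram recomputed from tracked object positions could trigger sonification when it changes substantially:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

def dendrogram_heights(points, method="average"):
    """Agglomerative clustering of 2-D tabletop positions; the merge
    heights (third linkage column) summarise the dendrogram's shape."""
    return linkage(pdist(points), method=method)[:, 2]

def substantial_change(points_before, points_after, threshold=0.1):
    """Hypothetical trigger condition: sonify when the merge heights
    move by more than `threshold` on average."""
    h_before = dendrogram_heights(points_before)
    h_after = dendrogram_heights(points_after)
    return float(np.mean(np.abs(h_after - h_before))) > threshold

pts = np.random.default_rng(0).random((8, 2))  # stand-in for tracked objects
heights = dendrogram_heights(pts)
```

For n objects the linkage produces n - 1 merges, so `heights` has one entry per merge; comparing successive height vectors is one cheap way to detect that the cluster structure shifted.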

    Supplementary Material for "Auditory Augmentation"

    Bovermann T, Tünnermann R, Hermann T. Supplementary Material for "Auditory Augmentation". Bielefeld University; 2010. Auditory augmentations are building blocks supporting the design of data representation tools, which unobtrusively alter the auditory characteristics of structure-borne sounds. The system enriches the structure-borne sound of objects with a sonification of (near) real-time data streams. The object's auditory gestalt is shaped by data-driven parameters, creating a subtle display for ambient data streams. Auditory augmentation can be easily overlaid onto existing sounds, and does not change prominent auditory features of the augmented objects like the sound's timing or its volume. In a peripheral monitoring situation, the data stays out of the users' attention if they want to concentrate on other items. However, a characteristic change will catch the users' attention.

    Auditory Augmentation

    Bovermann T, Tünnermann R, Hermann T. Auditory Augmentation. International Journal on Ambient Computing and Intelligence (IJACI). 2010;2(2):27-41. With auditory augmentation, the authors describe building blocks supporting the design of data representation tools, which unobtrusively alter the auditory characteristics of structure-borne sounds. The system enriches the structure-borne sound of objects with a sonification of (near) real-time data streams. The object's auditory gestalt is shaped by data-driven parameters, creating a subtle display for ambient data streams. Auditory augmentation can be easily overlaid onto existing sounds, and does not change prominent auditory features of the augmented objects like the sound's timing or its level. In a peripheral monitoring situation, the data stay out of the users' attention, which thereby remains free to focus on a primary task. However, any characteristic sound change will catch the users' attention. This article describes the principles of auditory augmentation, gives an introduction to the Reim Software Toolbox, and presents the first observations made in a preliminary long-term user study.
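The mapping principle (data-driven parameters that colour a sound while leaving its timing and level untouched) can be sketched as follows. The `augment` function and its parameters are hypothetical illustrations, not the Reim toolbox API; the sketch colours a recorded structure-borne sound with a data-driven modulation and then restores the original peak level:

```python
import numpy as np

def augment(structure_sound, data_stream, sr=44100,
            base_hz=1000.0, span_hz=800.0):
    """Hypothetical sketch of the auditory-augmentation idea: colour a
    structure-borne sound with a data-driven modulation while leaving
    its timing and overall level untouched."""
    # Stretch the (slow) data stream to one value per audio sample.
    data = np.interp(np.linspace(0, len(data_stream) - 1, len(structure_sound)),
                     np.arange(len(data_stream)), data_stream)
    centre = base_hz + span_hz * data            # data-driven modulation frequency
    phase = np.cumsum(2 * np.pi * centre / sr)
    coloured = structure_sound * (1.0 + 0.2 * np.sin(phase))  # subtle colouring
    # Restore the original peak so the prominent feature "level" is preserved.
    return coloured * (np.max(np.abs(structure_sound)) / np.max(np.abs(coloured)))

knock = np.random.default_rng(1).standard_normal(4410)  # stand-in for a knock sound
series = np.linspace(0.0, 1.0, 16)                      # stand-in ambient data stream
out = augment(knock, series)
```

The key design constraint from the article is visible in the code: the data changes only a subtle timbral parameter, never the onset times or the peak level of the augmented object.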

    Supplementary Material for "A SuperCollider Class for Vowel Synthesis and its Use for Sonification"

    Grond F, Bovermann T, Hermann T. Supplementary Material for "A SuperCollider Class for Vowel Synthesis and its Use for Sonification". Bielefeld University; 2011. In this paper, we present building blocks for the synthesis of vowel sounds in the programming language SuperCollider. We discuss the advantages of using vowel-based synthesis and review where it has already been used in sonifications. Then, we describe in detail the main class Vowel, which handles all parameters related to the formants that are typically used for vowel synthesis. In order to simplify the handling of the Vowel class, we introduce two auxiliary pseudo-UGens: Formants for additive synthesis, and BPFStack for subtractive synthesis. This introduction of the building blocks is followed by code examples for sound synthesis, which make use of the described classes and their specific features. We finally present sample applications, showing how these building blocks can be used in sonification.

    The local heat exploration model for interactive sonification

    Presented at the 11th International Conference on Auditory Display (ICAD2005). This paper presents a new sonification model for the exploration of topographically ordered high-dimensional data (multi-parameter maps, volume data) where each data item consists of a position and a feature vector. The sonification model implements a common metaphor from thermodynamics: heat can be interpreted as stochastic motion of 'molecules'. The latter are determined by the data under examination, and 'live' only in the feature space. Heat-induced interactions cause acoustic events that fuse to a granular sound texture which conveys meaningful information about the underlying distribution in feature space. As a second ingredient of the model, data selection is achieved by a separate navigation process in position space using a dynamic aura model, such that heat can be induced locally. Both a visual and an auditory display are driven by the underlying model. We exemplify the sonification by means of interaction examples for different high-dimensional distributions.
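The model's two ingredients (stochastic motion in feature space, and local heat induction via an aura in position space) can be sketched minimally. All names, constants, and the collision criterion below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def heat_events(features, positions, aura_centre, aura_radius,
                heat=1.0, steps=50, collide_dist=0.05, seed=0):
    """Minimal sketch of the local-heat idea: data items whose positions
    fall inside the aura receive heat, i.e. stochastic motion in feature
    space; close encounters between moving 'molecules' are counted as
    acoustic events (which a sonification would render as grains)."""
    rng = np.random.default_rng(seed)
    x = features.copy()
    # Data selection happens in position space via the aura.
    selected = np.linalg.norm(positions - aura_centre, axis=1) < aura_radius
    events = 0
    for _ in range(steps):
        # Heat-induced Brownian step, only for items inside the aura.
        x[selected] += heat * 0.01 * rng.standard_normal(x[selected].shape)
        # Count pairwise near-collisions among selected items in feature space.
        sel = x[selected]
        d = np.linalg.norm(sel[:, None] - sel[None, :], axis=-1)
        events += int(np.sum(np.triu(d < collide_dist, k=1)))
    return events

feats = np.random.default_rng(3).random((10, 3))   # stand-in feature vectors
pos = np.random.default_rng(4).random((10, 2))     # stand-in positions
n_events = heat_events(feats, pos, np.array([0.5, 0.5]), aura_radius=0.4)
```

Because collisions depend on the local density of the selected items in feature space, the event rate reflects the underlying distribution, which is exactly what the granular sound texture conveys.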

    Supplementary Material for "The Local Heat Exploration Model for Interactive Sonification"

    Bovermann T, Hermann T, Ritter H. Supplementary Material for "The Local Heat Exploration Model for Interactive Sonification". Bielefeld University; 2005. This paper presents a new sonification model for the exploration of topographically ordered high-dimensional data (multi-parameter maps, volume data) where each data item consists of a position and a feature vector. The sonification model implements a common metaphor from thermodynamics: heat can be interpreted as stochastic motion of 'molecules'. The latter are determined by the data under examination, and 'live' only in the feature space. Heat-induced interactions cause acoustic events that fuse to a granular sound texture which conveys meaningful information about the underlying distribution in feature space. As a second ingredient of the model, data selection is achieved by a separate navigation process in position space using a dynamic aura model, such that heat can be induced locally. Both a visual and an auditory display are driven by the underlying model. We exemplify the sonification by means of interaction examples for different high-dimensional distributions.
## Examples

In the paper the following sounds are described and discussed:

### Controlled Center

Aura Position in Controlled Center Example

#### Gaussian data sets

- Spectrogram of Gauss Example with lambda = 3: [lambda = 3 (mp3, 1.3MB)](https://pub.uni-bielefeld.de/download/2700941/2700943)
- Spectrogram of Gauss Example with lambda = 30: [lambda = 30 (mp3, 1.3MB)](https://pub.uni-bielefeld.de/download/2700941/2700942)
- Spectrogram of Gauss Example with lambda = 300: [lambda = 300 (mp3, 1.3MB)](https://pub.uni-bielefeld.de/download/2700941/2700945)

#### Hollow Sphere data sets

- Spectrogram of Sphere Example with lambda = 5: [lambda = 5 (mp3, 1.3MB)](https://pub.uni-bielefeld.de/download/2700941/2700948)
- Spectrogram of Sphere Example with lambda = 50: [lambda = 50 (mp3, 1.3MB)](https://pub.uni-bielefeld.de/download/2700941/2700947)
- Spectrogram of Sphere Example with lambda = 100: [lambda = 100 (mp3, 1.3MB)](https://pub.uni-bielefeld.de/download/2700941/2700946)

### Interactive

Aura Center in Interactive Example

- Spectrogram of Interactive Example: [Hear it (mp3, 1.3MB)](https://pub.uni-bielefeld.de/download/2700941/2704527)